Support arbitrary number of feature dimensions in latent to discrete transformation #26
Conversation
Thanks @VolodyaCO. Don't forget to add an entry to the release-log.
@thisac I don't understand the reno process for release notes very well. Is every PR intended to create a new release?
@thisac nvm, I just read "All of the list items in this section are combined when the release notes are rendered".
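For reference, a reno release note is a small YAML file under releasenotes/notes/, usually created with `reno new <slug>`. A minimal sketch, assuming the repository uses reno's default template (the filename and wording below are illustrative, not the actual note in this PR):

    # releasenotes/notes/arbitrary-feature-dims-<generated-suffix>.yaml
    features:
      - |
        The latent-to-discrete transformation now supports an arbitrary
        number of feature dimensions in the encoded representation.

Reno combines the list items from every such file when rendering the notes for a release, so each PR only adds its own file and no release is cut per PR.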
Thanks very much Vlad for the nice PR. I've added a few minor comments.
LGTM with some very minor feedback
    @parameterized.expand(
        [
            (1, torch.tensor([[[1.0, 1.0]], [[1.0, -1.0]], [[-1.0, -1.0]], [[-1.0, 1.0]]])),
            (
                5,
                torch.tensor(
                    [[[1.0, 1.0]] * 5, [[1.0, -1.0]] * 5, [[-1.0, -1.0]] * 5, [[-1.0, 1.0]] * 5]
                ),
            ),
        ]
    )
Suggested change (replacing the block above):

    @parameterized.expand([
        (1, torch.tensor([[[1, 1]], [[1, -1]], [[-1, -1]], [[-1, 1]]]).float()),
        (5, torch.tensor([[[1, 1]]*5, [[1, -1]]*5, [[-1, -1]]*5, [[-1, 1]]*5]).float())
    ])
This is the same. I'll leave it as it is, or you can commit this suggestion later.
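Both spellings do produce an identical float32 tensor; a quick standalone check (illustrative, not part of the PR):

    import torch

    # Integer literals followed by .float() yield the same float32 tensor
    # as writing the float literals directly.
    a = torch.tensor([[[1.0, 1.0]], [[1.0, -1.0]]])
    b = torch.tensor([[[1, 1]], [[1, -1]]]).float()
    assert torch.equal(a, b)  # same shape, values, and dtype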
Commits:
- fix number of feature dimensions
- [docs] consistent notation
- release
- ignore files in aim
- testing different number of latent dimensions
- extend tests to more feature dimensions
- Update tests/test_dvae_winci2020.py (Co-authored-by: Anahita Mansouri Bigvand <[email protected]>)
- Update tests/test_dvae_winci2020.py (Co-authored-by: Anahita Mansouri Bigvand <[email protected]>)
- Update tests/test_dvae_winci2020.py (Co-authored-by: Anahita Mansouri Bigvand <[email protected]>)
LGTM @thisac @VolodyaCO
LGTM
The encoder might not encode a datapoint x to a 1D spin string. In autoencoders with vector-quantisation layers, the encoder may instead map a datapoint x to an HxWxC array. In that case the default latent-to-discrete transformation would be wrong, since it expects a single feature dimension whereas here there are three.
This PR fixes that by automatically detecting the number of feature dimensions in the encoded data representation and performing the Gumbel-softmax transformation accordingly, keeping the same number of feature dimensions.
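A minimal sketch of the idea (not the repository's actual code; the function name, the convention of two-class logits on the last axis, and the spin mapping are assumptions):

    import torch
    import torch.nn.functional as F

    def latent_to_discrete(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        """Map encoder logits of shape (batch, *feature_dims, 2) to spins in {-1, +1}.

        The Gumbel-softmax is applied over the last (class) axis only, so any
        number of feature dimensions in between is preserved: inputs of shape
        (B, D, 2) and (B, H, W, C, 2) both work unchanged.
        """
        # Straight-through Gumbel-softmax sample over the class axis.
        one_hot = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)
        # Two-class one-hot -> spin: class 0 maps to +1, class 1 to -1.
        return one_hot[..., 0] - one_hot[..., 1]

Because the class axis is addressed relative to the end of the shape, the leading feature dimensions are never touched, and the returned spins keep exactly the feature shape of the encoder output.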